Learning from Timnit Gebru

A blog post reflecting on the work of Timnit Gebru.
Author

Julia Fairbank

Published

April 18, 2023

About

Timnit Gebru is a well-known computer scientist and AI researcher, with the majority of her work focusing on bias and equity. Gebru received her bachelor’s and master’s degrees in electrical engineering from Stanford University, as well as a PhD in computer vision, also from Stanford.

Gebru has worked at various tech companies, including Apple, Microsoft, and Google, where she worked on creating space for underrepresented groups in AI. In 2020, after a company dispute over a research paper identifying the risks of large language models, Gebru was fired from her position as a co-lead of Google’s Ethical AI team. Her removal from Google sparked widespread debate about the inherent sexism and racism in the tech industry and highlighted the importance of considering the role of ethics in AI development.

Middlebury College is lucky to be joined by Timnit Gebru on Monday, April 24th, for a virtual Zoom conversation discussing the bias and social impacts of artificial intelligence.

FATE/CV 2020 | Computer vision in practice: who is benefiting and who is being harmed?

This talk discusses the importance of diversity and fair representation in the field of AI. Because AI reflects the programmers who develop it and will have a global impact, there is an immediate need for AI systems to be built by a diverse set of programmers focused on creating ethical tools that benefit humanity, not just tools that are profitable. As AI spreads across the world and into different industries, both benefits and dangers will arise. When our healthcare systems, police units, and schools use AI, the technology has the capacity either to support underrepresented groups or to continue suppressing them. It is crucial for developers to identify the different biases that could affect AI systems and to ensure that these systems are trained on unbiased, well-represented data. We must hold these developers accountable for the development of these programs.

tl;dr: If you train an AI on biased data, it will give you biased results; developers are responsible for building and training AI on fair, diverse data so that inherent biases are not amplified by AI systems.

Question for Timnit Gebru

Do you think the changes necessary to address issues of bias and fairness in AI systems can be made by developers alone, or will there need to be structural/managerial changes among the heads of the leading technology companies and/or their investment teams? If the latter, what could incentivize executives to shift away from profitability and toward ethical development and human wellbeing?

Discussion Reflection

Dr. Gebru began by discussing her foundation, DAIR, and its mission of both mitigating the harms of current AI systems and imagining and executing its own, different technological future. She used this mission statement to argue that big tech leaders are pouring their resources and time exclusively into problems that interest them, rather than problems the world needs addressed. She used Karen Hao’s tweets to argue that technology, like other basic human resources, has never been distributed equally: not only is technology being developed in line with the interests of some of the largest companies in the world, but the technology that is developed will still only be reachable by those with an abundance of resources. To contrast this perspective, she cited the tweets of several big technology executives, highlighting how these leaders claim the technology will change the world and has the potential to create an unimaginable utopia. She condemned this view, arguing that this is not a utopia that is wanted by, or that benefits, everyone.

Gebru also discusses eugenics and its relationship to artificial general intelligence. She analyzes the term AGI deeply, trying to pin down exactly what it means, and concludes that there is no single, clear, definite definition. She continues by giving background on first- and second-wave eugenics, then goes through a few different philosophies, such as effective altruism and transhumanism, condemning them all. Dr. Gebru concludes the presentation by discussing the AGI apocalypse and the biases that AGI has the ability to perpetuate unless they are actively fought against.

I personally had a very tough time following this presentation. I had to watch the recording several times to understand the logic of her arguments, on account of how many different topics were brought up out of context. I thought this presentation was very extreme, and I was disappointed that her argument was rooted in emotion and political opinions, using tweets as the main source of evidence rather than data or logical reasoning. I believe there is a lot to be said about the future of AI and the ethics behind it (I did my senior philosophy thesis on it), and I was incredibly shocked to find that this presentation was more of an attack on the individuals on the other side than an explanation of her research and beliefs. I was especially disappointed with her response to the first question, which contributed to my opinion that she substantially lacked any clear defense of her argument in this presentation, instead building her talking points on the tweets of her opponents. Dr. Gebru has a very fascinating background and impressive experience, and I wish she had relied more on her research and education to build and support her argument.

Reflect on the Process

I had a challenging time with this interaction. I think Dr. Gebru’s work is incredibly interesting and important to the current state of our world; however, I think she failed to capture her audience and explain her work in a reliable way. I’m not sure how my thoughts on this discussion contribute to my feelings and opinions toward these issues as a whole, but I know that I’m not satisfied with the discussion or with Gebru’s interactions with undergraduate students.